State-of-the-art Named Entity Recognition (NER) models rely heavily on fully annotated training data. However, accessible data are often incompletely annotated, since annotators usually lack comprehensive knowledge of the target domain. Normally, unannotated tokens are treated as non-entities by default, whereas we stress that these tokens could be non-entities or part of any entity. Here, we study NER modeling with incompletely annotated data, where only a fraction of the named entities are labeled and each unlabeled token is effectively multi-labeled with every possible tag. The resulting multitude of candidate label paths can distract the trained model from the gold path (the ground-truth label sequence) and thus hinder learning. In this paper, we propose AdaK-NER, short for adaptive top-K, which concentrates the model on a smaller feasible region where the gold path is more likely to be located. We demonstrate the advantage of our approach through extensive experiments on English and Chinese datasets, improving the F-score by an average of 2% on CoNLL-2003 and by over 10% on two Chinese datasets compared with previous state-of-the-art work.
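A minimal sketch of the underlying training idea may help: with incomplete annotation, the objective marginalizes over all label paths compatible with the observed labels, and restricting unannotated positions to a small top-K candidate set shrinks that feasible region. The scoring model (per-token emission plus transition log-scores) and the top-K selection rule below are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: marginal log-likelihood over a restricted feasible region.
import torch

def marginal_log_likelihood(emissions, transitions, observed, k):
    """emissions: (T, L) per-token label log-scores; transitions: (L, L);
    observed: list of gold label ids, or None for unannotated tokens.
    Sums path scores over the feasible region via a forward recursion,
    keeping only the top-k emission labels at unannotated positions."""
    T, L = emissions.shape

    def allowed(t):
        if observed[t] is not None:
            return [observed[t]]
        return emissions[t].topk(k).indices.tolist()

    # log-sum-exp forward pass restricted to the allowed label sets
    alpha = {y: emissions[0, y] for y in allowed(0)}
    for t in range(1, T):
        alpha = {
            y: emissions[t, y] + torch.logsumexp(
                torch.stack([alpha[yp] + transitions[yp, y] for yp in alpha]), dim=0)
            for y in allowed(t)
        }
    return torch.logsumexp(torch.stack(list(alpha.values())), dim=0)

# Toy usage: 4 tokens, 5 labels, tokens 1 and 3 unannotated.
emissions = torch.randn(4, 5)
transitions = torch.randn(5, 5)
print(marginal_log_likelihood(emissions, transitions, [0, None, 2, None], k=2))
```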
We aim to quantitatively measure the practical usability of medical image segmentation models: to what extent, how often, and on which samples a model's predictions can be used or trusted. We first propose a measure, Correctness-Confidence Rank Correlation (CCRC), to capture how a prediction's confidence estimate correlates with its correctness score in rank. A model with a high CCRC value means its prediction confidences reliably suggest which samples' predictions are more likely to be correct. Since CCRC does not capture actual prediction correctness, it alone is insufficient to indicate whether a model is both accurate and reliable enough to use in practice. We therefore further propose the Usable Region Estimate (URE), which quantifies predictions' correctness and the reliability of their confidence assessments simultaneously in one estimate. URE provides concrete information on the degree to which a model's predictions are usable. Moreover, the sizes of usable regions (URs) can be used to compare models: a model with a larger UR can be regarded as more usable and hence better. Experiments on six datasets validate that the proposed evaluation methods perform well, providing concrete and concise measures of the practical usability of medical image segmentation models. Code is available at https://github.com/yizhezhang2000/ure.
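A minimal sketch of both evaluation ideas, under assumptions: CCRC is implemented here as a Spearman rank correlation between per-sample confidences and correctness scores (e.g., Dice), and the usable region is approximated as the largest confidence-ranked prefix whose correctness stays above a threshold. The exact definitions in the paper may differ.

```python
# Hedged sketch of CCRC and a threshold-based usable-region estimate.
import numpy as np
from scipy.stats import spearmanr

def ccrc(confidences, correctness):
    """Correctness-Confidence Rank Correlation: how well the per-sample
    confidence ranking agrees with the correctness ranking (e.g., Dice),
    implemented here as Spearman's rho."""
    rho, _ = spearmanr(confidences, correctness)
    return rho

def usable_region_size(confidences, correctness, threshold=0.9):
    """Illustrative usable-region estimate: rank samples by confidence
    (most confident first) and return the largest prefix whose
    correctness stays at or above `threshold` (assumed criterion)."""
    order = np.argsort(-np.asarray(confidences))
    sorted_correct = np.asarray(correctness)[order]
    usable = 0
    for i, c in enumerate(sorted_correct, start=1):
        if c < threshold:
            break
        usable = i
    return usable / len(sorted_correct)

# Toy usage: confidences and Dice scores for 5 segmentation predictions.
conf = [0.95, 0.90, 0.80, 0.70, 0.60]
dice = [0.93, 0.91, 0.88, 0.85, 0.70]
print(ccrc(conf, dice), usable_region_size(conf, dice))
```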
Open-vocabulary object detection (OVD) aims to scale up the vocabulary size to detect objects of novel categories beyond the training vocabulary. Recent work resorts to the rich knowledge in pre-trained vision-language models. However, existing methods are ineffective at proposal-level vision-language alignment. Meanwhile, such models usually suffer from a confidence bias toward base categories and perform worse on novel ones. To overcome these challenges, we present MEDet, a novel and effective OVD framework with proposal mining and prediction equalization. First, we design an online proposal mining scheme to refine the inherited vision-semantic knowledge in a coarse-to-fine manner, allowing for proposal-level, detection-oriented feature alignment. Second, based on causal inference theory, we introduce a class-wise backdoor adjustment to reinforce predictions on novel categories and improve overall OVD performance. Extensive experiments on the COCO and LVIS benchmarks verify the superiority of MEDet in detecting objects of novel categories, e.g., 32.6% AP50 on COCO and 22.4% mask mAP on LVIS.
Vision Transformers (ViTs) are changing the landscape of object detection. A natural way to use ViTs for detection is to replace the CNN-based backbone with a transformer-based one, which is simple and effective but brings a considerable computational burden at inference. A more subtle usage is the DETR family, which eliminates the need for many hand-designed components in object detection but introduces a decoder that requires an extra-long training schedule to converge. As a result, transformer-based object detection has not prevailed in large-scale applications. To overcome these issues, we propose a novel Decoder-Free Fully Transformer-based (DFFT) object detector, achieving high efficiency in both the training and inference stages for the first time. We simplify object detection into an encoder-only, single-level anchor-based dense prediction problem by centering on two entry points: 1) eliminate the training-inefficient decoder and leverage two strong encoders to preserve the accuracy of single-level feature-map prediction; 2) explore low-level semantic features for the detection task with limited computational resources. In particular, we design a novel lightweight detection-oriented transformer backbone that efficiently captures low-level features with rich semantics, guided by a thorough ablation study. Extensive experiments on the MS COCO benchmark demonstrate that DFFT_SMALL outperforms DETR by 2.5% AP with a 28% reduction in computation cost and more than $10\times$ fewer training epochs. Compared with the cutting-edge anchor-based detector RetinaNet, DFFT_SMALL obtains over 5.5% AP gain while cutting the computation cost by 70%.
This paper proposes an Anytime super-Resolution Method (ARM) to tackle over-parameterized single image super-resolution (SISR) models. Our ARM is motivated by three observations: (1) the performance on different image patches varies with SISR networks of different sizes; (2) there is a trade-off between computational overhead and the quality of the reconstructed image; (3) given an input image, its edge information is an effective option for estimating its PSNR. Accordingly, we train an ARM supernet containing SISR subnets of different sizes to handle image patches of various complexities. To that end, we construct an Edge-to-PSNR lookup table that maps the edge score of an image patch to the PSNR performance of each subnet, together with a set of computation costs for the subnets. At inference, image patches are individually dispatched to different subnets for a better computation-performance trade-off. Moreover, each SISR subnet shares the weights of the ARM supernet, so no extra parameters are introduced. The multiple-subnet design also allows the computational cost of the SISR model to adapt to dynamically available hardware resources, enabling the SISR task to be served at any time. Extensive experiments on resolution datasets of different sizes, with popular SISR networks as backbones, verify the effectiveness and versatility of our ARM. The source code is available at https://github.com/chenbong/arm-net.
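A minimal sketch of the patch-to-subnet dispatch idea described above: compute an edge score per patch, look up the expected PSNR of each subnet for that score, and pick the subnet with the best accuracy-cost trade-off. The edge score, the lookup-table construction, and the trade-off rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of edge-score-based subnet selection.
import numpy as np

def edge_score(patch):
    """Mean gradient magnitude of a grayscale patch as a crude edge score."""
    gy, gx = np.gradient(patch.astype(np.float32))
    return float(np.mean(np.hypot(gx, gy)))

def choose_subnet(patch, edge_bins, psnr_table, flops, tradeoff=0.1):
    """Pick the subnet maximizing predicted PSNR minus a FLOPs penalty.
    `psnr_table[b][s]` holds the (pre-computed) PSNR of subnet `s` on
    patches whose edge score falls into bin `b`."""
    b = int(np.digitize(edge_score(patch), edge_bins))
    scores = [psnr_table[b][s] - tradeoff * flops[s] for s in range(len(flops))]
    return int(np.argmax(scores))

# Toy usage: 3 subnets (small/medium/large), 4 edge-score bins.
edge_bins = [5.0, 15.0, 30.0]                      # bin boundaries
psnr_table = np.array([[36.0, 36.2, 36.3],         # flat patches
                       [33.0, 33.8, 34.1],
                       [30.0, 31.5, 32.2],
                       [27.0, 29.0, 30.4]])        # edge-rich patches
flops = [1.0, 2.0, 4.0]                            # relative cost per subnet
patch = np.random.rand(32, 32) * 255
print("subnet:", choose_subnet(patch, edge_bins, psnr_table, flops))
```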
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, namely by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
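As a concrete illustration of the first finding, here is a minimal sketch of a token-relation distillation loss: per-image token-token similarity maps from the teacher's intermediate layer are matched by the student with a KL objective. The choice of features, the cosine-softmax relation, and the temperature are assumptions for illustration; TinyMIM's actual relation targets may be defined differently (e.g., over attention queries, keys, and values).

```python
# Hedged sketch of a token-relation distillation loss.
import torch
import torch.nn.functional as F

def token_relation_distill_loss(student_tokens, teacher_tokens, tau=1.0):
    """student_tokens, teacher_tokens: (B, N, D) patch-token features.
    Builds per-image NxN relation maps and matches them with KL divergence."""
    def relation(x):
        x = F.normalize(x, dim=-1)                       # cosine similarity
        return F.softmax(x @ x.transpose(1, 2) / tau, dim=-1)
    s_rel = relation(student_tokens)
    with torch.no_grad():
        t_rel = relation(teacher_tokens)
    return F.kl_div(s_rel.log(), t_rel, reduction="batchmean")

# Toy usage: batch of 2 images, 196 tokens, feature dim 768 (in practice a
# projection head would align mismatched teacher/student dimensions).
s = torch.randn(2, 196, 768)
t = torch.randn(2, 196, 768)
print(token_relation_distill_loss(s, t).item())
```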
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
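A minimal sketch of a style-aware adaptive feed-forward layer in the spirit described above: the style code predicts per-channel scales that modulate the layer's projection weights. The StyleGAN-like weight modulation used here is an assumption for illustration; the paper's exact adaptation mechanism may differ.

```python
# Hedged sketch of a feed-forward layer whose weights are adjusted by a style code.
import torch
import torch.nn as nn

class StyleAwareFeedForward(nn.Module):
    def __init__(self, dim, hidden_dim, style_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, dim)
        self.to_scale = nn.Linear(style_dim, dim)   # style -> input-channel scales

    def forward(self, x, style_code):
        # Modulate the first projection's weights per input channel.
        scale = 1.0 + self.to_scale(style_code)                     # (B, dim)
        w = self.fc1.weight.unsqueeze(0) * scale.unsqueeze(1)       # (B, hidden, dim)
        h = torch.einsum("bnd,bhd->bnh", x, w) + self.fc1.bias
        return self.fc2(torch.relu(h))

# Toy usage: batch of 2 sequences, 50 tokens, model dim 256, style dim 128.
ffn = StyleAwareFeedForward(dim=256, hidden_dim=512, style_dim=128)
out = ffn(torch.randn(2, 50, 256), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 50, 256])
```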
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally-equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
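A minimal sketch of a Global Response Normalization (GRN) layer as described above: aggregate a global per-channel response, normalize it across channels, and use the result to calibrate the features. Details such as the epsilon placement and the zero-initialized affine parameters are assumptions here.

```python
# Hedged sketch of a GRN layer for channels-last inputs.
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization over channels-last inputs (N, H, W, C)."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x):
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)       # global response per channel
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)     # divisive normalization across channels
        return self.gamma * (x * nx) + self.beta + x             # calibration plus residual

# Toy usage inside a ConvNeXt-style block (channels-last tensor).
grn = GRN(dim=96)
y = grn(torch.randn(2, 56, 56, 96))
print(y.shape)  # torch.Size([2, 56, 56, 96])
```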